504 research outputs found

    CVRetrieval: Separating Consistency Retrieval from Consistency Maintenance

    In distributed online collaboration applications, such as digital whiteboards and online games, it is important to guarantee consistency among participants’ views to make collaboration meaningful. However, maintaining even relaxed consistency in a distributed environment with a large number of geographically dispersed participants still incurs formidable communication and management costs. In this paper, we propose CVRetrieval (Consistency View Retrieval) to address this scalability problem. Based on the observation that not all participants are equally active or engaged in distributed online collaboration applications, CVRetrieval separates the notions of consistency maintenance and consistency retrieval. Here, consistency maintenance refers to a protocol that periodically communicates with all participants to maintain a certain consistency level, while consistency retrieval means that passive participants (those with little updating activity) explicitly request a consistent view from the system when the need arises, instead of joining the expensive consistency maintenance protocol at all times. The rationale is that, if a participant has no updating activity, it is much more cost-effective to satisfy his or her needs on demand. The evaluation of CVRetrieval has two parts. First, we theoretically analyze the scalability of CVRetrieval and compare it to other consistency maintenance protocols; the analysis shows that CVRetrieval can greatly reduce communication cost and hence make consistency control more scalable. Second, a prototype of CVRetrieval is developed and deployed on the Planet-Lab test-bed to evaluate its performance. The results show that active participants experience short response times at some expense of passive participants, who may encounter longer response times depending on the system setting. Overall, retrieval performance remains reasonably high.
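    The split between maintenance and on-demand retrieval described above can be illustrated with a minimal sketch. All class and method names here are invented for illustration and are not the paper's actual API:

```python
# Sketch: active participants receive every pushed update via the maintenance
# protocol; passive participants hold no live view and fetch one on demand.

class SharedState:
    """Server-side state with a monotonically increasing version number."""
    def __init__(self):
        self.version = 0
        self.data = {}

    def update(self, key, value):
        self.version += 1
        self.data[key] = value
        return self.version

class ActiveParticipant:
    """Joins the maintenance protocol: every update is pushed to it."""
    def __init__(self):
        self.view = {}
        self.version = 0

    def on_push(self, version, data):
        self.version, self.view = version, dict(data)

class PassiveParticipant:
    """Skips maintenance; explicitly retrieves a consistent view when needed."""
    def retrieve(self, state):
        return state.version, dict(state.data)

state = SharedState()
alice = ActiveParticipant()          # active: pays the ongoing maintenance cost
bob = PassiveParticipant()           # passive: pays only on retrieval

v = state.update("cursor", (3, 7))   # an update is pushed to active nodes only
alice.on_push(v, state.data)

# Bob incurs communication cost only at the moment he needs a consistent view.
version, view = bob.retrieve(state)
```

    The cost asymmetry is the point: the server communicates with `alice` on every update but with `bob` only once per explicit request.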

    Web service search: who, when, what, and how

    Web service search is an important problem in service-oriented architecture that has attracted widespread attention from academia as well as industry. Web service search can be performed by various stakeholders, in different situations, using different forms of queries, and those combinations result in radically different implementations. Using a real-world web service composition example, this paper describes when, what, and how to search for web services from the service assembler’s point of view, where the semantics of web services are not explicitly described. The example outlines an approach to implementing a web service broker that can recommend useful services to service assemblers.
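    Since the abstract notes that service semantics are not explicitly described, a broker in this spirit might fall back on textual matching. The registry contents and the keyword-overlap scoring below are invented for illustration, not the paper's method:

```python
# Sketch of a broker that recommends services to an assembler by ranking
# registered services on keyword overlap between the query and descriptions.

def recommend(query_terms, registry, top_k=2):
    """Rank services by overlap of query terms with description terms."""
    def score(desc):
        return len(set(query_terms) & set(desc.lower().split()))
    ranked = sorted(registry.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [name for name, desc in ranked[:top_k] if score(desc) > 0]

registry = {
    "CurrencyConvert": "convert currency exchange rate service",
    "ZipLookup": "postal zip code lookup service",
    "WeatherFeed": "weather forecast feed service",
}

print(recommend(["currency", "rate"], registry, top_k=1))  # → ['CurrencyConvert']
```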

    Adaptive Consistency Guarantees for Large-Scale Replicated Services

    To maintain consistency, designers of replicated services have traditionally been forced to choose between strong consistency guarantees and none at all. Recognizing that a continuum between strong and optimistic consistency is semantically meaningful for a broad range of network services, previous research has proposed a continuous consistency model for replicated services to support the trade-off among the guaranteed consistency level, performance, and availability. However, to meet changing application needs and to make the model useful for interactive users of large-scale replicated services, the adaptability and swiftness of inconsistency resolution are both important and challenging. This paper presents IDEA (an Infrastructure for DEtection-based Adaptive consistency guarantees) for adaptive consistency guarantees in large-scale, Internet-based replicated services. The main functions enabled by IDEA are quick inconsistency detection and resolution, consistency adaptation, and quantified consistency-level guarantees. Through experimentation on Planet-Lab, IDEA is evaluated in two respects: its adaptive consistency guarantees and its performance for inconsistency resolution. Results show that IDEA provides consistency guarantees that adapt to users’ changing needs, achieves low delay for inconsistency resolution, and incurs small communication overhead.
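    A detection-based adaptation loop of the kind described above can be sketched in a few lines. The divergence metric and the halving/doubling adaptation rule are assumptions made for this illustration, not IDEA's actual mechanism:

```python
# Sketch: measure divergence among replicas, then tighten or relax the
# consistency bound toward a user-requested target.

def divergence(replicas):
    """Inconsistency metric: spread between most and least advanced replica."""
    return max(replicas) - min(replicas)

def adapt_bound(current_bound, observed, target):
    """Tighten when observed divergence exceeds the target, relax otherwise."""
    if observed > target:
        return max(target, current_bound / 2)   # tighten: sync more aggressively
    return min(2 * current_bound, 4 * target)   # relax: save communication

replicas = [100, 97, 103]          # per-replica version counters
bound = 16
obs = divergence(replicas)         # 6: exceeds the target of 4
bound = adapt_bound(bound, obs, target=4)
```

    Tightening the bound trades extra synchronization traffic for a stronger guarantee; relaxing it does the reverse, which is the continuum the abstract refers to.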

    IDEA: An Infrastructure for Detection-based Adaptive Consistency Control in Replicated Services

    In Internet-scale distributed systems, replication-based schemes have been widely deployed to increase the availability and efficiency of services. Consistency maintenance among replicas therefore becomes an important research issue, because poor consistency results in poor QoS or even monetary loss. Recent research in this area focuses on enforcing a certain consistency level, instead of perfect consistency, to strike a balance between consistency guarantees and the system’s scalability. In this paper, we argue that, besides balancing consistency and scalability, it is equally, if not more, important to achieve adaptability of consistency maintenance; that is, the system should adjust its consistency level on the fly to suit applications’ ongoing needs. This paper presents the design, implementation, and evaluation of IDEA (an Infrastructure for DEtection-based Adaptive consistency control), which adaptively controls consistency in replicated services by utilizing an inconsistency detection framework that detects inconsistency among nodes in a timely manner. IDEA also achieves high performance in inconsistency resolution in terms of resolution delay. Through two emulated distributed applications on Planet-Lab, IDEA is evaluated in two respects: its adaptive interface and its performance in inconsistency resolution. According to the experiments, IDEA achieves adaptability by adjusting the consistency level according to users’ preferences on demand, and it achieves low inconsistency-resolution delay and communication cost.
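    Detecting inconsistency among nodes, as the framework above does, is commonly done with version vectors; the sketch below uses that standard technique as a stand-in, since the abstract does not specify IDEA's internal data structures:

```python
# Sketch: compare per-node version vectors to detect divergent replicas,
# then resolve by merging to the element-wise maximum.

def detect_inconsistency(vv_a, vv_b):
    """True if the replicas have seen different sets of updates."""
    keys = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(k, 0) > vv_b.get(k, 0) for k in keys)
    b_ahead = any(vv_b.get(k, 0) > vv_a.get(k, 0) for k in keys)
    return a_ahead or b_ahead

def resolve(vv_a, vv_b):
    """Resolution sketch: merge to the element-wise maximum."""
    keys = set(vv_a) | set(vv_b)
    return {k: max(vv_a.get(k, 0), vv_b.get(k, 0)) for k in keys}

r1 = {"n1": 3, "n2": 1}
r2 = {"n1": 2, "n2": 4}
inconsistent = detect_inconsistency(r1, r2)   # concurrent updates: conflict
merged = resolve(r1, r2)
```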

    Diversified Texture Synthesis with Feed-forward Networks

    Recent progress in deep discriminative and generative modeling has shown promising results for texture synthesis. However, existing feed-forward methods trade generality for efficiency and suffer from several issues: a shortage of generality (one network must be built per texture), a lack of diversity (they always produce visually identical output), and suboptimality (they generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network that enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. A suite of important techniques is also introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its application to stylization. Comment: accepted by CVPR201
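    The "one network, many textures" idea rests on selecting a texture with a code and interpolating between codes. The one-hot codes and the convex combination below are an illustrative stand-in for the network's conditioning inputs, not the paper's actual architecture:

```python
# Sketch: each texture is addressed by a selection code; interpolating two
# codes yields a blend request that a single multi-texture network can serve.

def one_hot(index, num_textures):
    """Selection code: a one-hot vector picks a single texture."""
    return [1.0 if i == index else 0.0 for i in range(num_textures)]

def interpolate(code_a, code_b, alpha):
    """Convex combination of two codes enables meaningful texture blends."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(code_a, code_b)]

brick = one_hot(0, 4)
grass = one_hot(2, 4)
blend = interpolate(brick, grass, alpha=0.25)   # 75% brick, 25% grass
```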

    Research on Privacy Paradox in Social Networks Based on Evolutionary Game Theory and Data Mining

    In order to obtain social benefits, social networks have started profiting from the private information of their users. Although users have growing concerns about the risk of privacy disclosure, they still generally disclose information despite high privacy concerns, which directly forms the privacy paradox. The expansion and generalization of the privacy paradox indicate that the implementation of privacy protection in social networks remains in a dilemma. Studying and solving the privacy paradox helps ensure the healthy development of the social networking industry. On this basis, this study designs a research framework that analyzes the privacy paradox of social networks along three dimensions: cause, existence, and form. After reviewing existing research on the privacy paradox in social networks, evolutionary game theory is introduced into the cause analysis, while data mining serves as the data-analysis method for the empirical study. In the research process, an evolutionary game model of the privacy paradox in social networks is first built, and the necessary conditions for the emergence of the privacy paradox are derived from the evolutionarily stable strategy. Second, a questionnaire survey is used to collect private data from active users of both Weibo and WeChat. Finally, the Apriori and CHAID algorithms are used to determine the relationships among user privacy concerns, privacy behavior, and other factors, which confirms the existence of the privacy paradox on the two social networks and compares their specific forms. This research systematically provides a useful and in-depth analysis of the privacy paradox in social networks and is meaningful for enterprises establishing a hierarchical protection system for users' privacy.
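    The evolutionary-game reasoning above can be sketched with standard replicator dynamics: the share of users who disclose grows whenever disclosing pays more than withholding. The payoff numbers are invented for illustration and are not taken from the study:

```python
# Sketch: replicator dynamics dx/dt = x(1-x)(u_disclose - u_withhold).
# When disclosure's net payoff exceeds withholding's, the population drifts
# toward full disclosure despite privacy concerns: the privacy paradox.

def replicator_step(x, payoff_disclose, payoff_withhold, dt=0.1):
    """One Euler step of the replicator equation for disclosure share x."""
    return x + dt * x * (1 - x) * (payoff_disclose - payoff_withhold)

# Assumed payoffs: disclosing yields social benefit 3 minus privacy risk 1;
# withholding yields 1. Net payoff favors disclosure.
x = 0.2                    # initial fraction of users who disclose
for _ in range(200):
    x = replicator_step(x, payoff_disclose=3 - 1, payoff_withhold=1)
# x drifts toward the evolutionarily stable strategy of full disclosure.
```

    The fixed points are x = 0 and x = 1; with the assumed payoffs, only x = 1 is stable, which mirrors the paper's condition for the paradox to emerge.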